Computer Science > Computation and Language

[Submitted on 7 Nov 2019 (v1), last revised 16 Nov 2020 (this version, v2)]

Title: BERTs of a feather do not generalize together: Large variability in generalization across models with similar test set performance

Authors: R. Thomas McCoy, Junghyun Min, Tal Linzen
Abstract: If the same neural network architecture is trained multiple times on the same dataset, will it make similar linguistic generalizations across runs? To study this question, we fine-tuned 100 instances of BERT on the Multi-Genre Natural Language Inference (MNLI) dataset and evaluated them on the HANS dataset, which evaluates syntactic generalization in natural language inference. On the MNLI development set, the behavior of all instances was remarkably consistent, with accuracy ranging between 83.6% and 84.8%. In stark contrast, the same models varied widely in their generalization performance. For example, on the simple case of subject-object swap (e.g., determining that "the doctor visited the lawyer" does not entail "the lawyer visited the doctor"), accuracy ranged from 0.00% to 66.2%. Such variation is likely due to the presence of many local minima that are equally attractive to a low-bias learner such as a neural network; decreasing the variability may therefore require models with stronger inductive biases.
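The evaluation setup described in the abstract follows the standard HANS protocol: HANS labels are binary (entailment vs. non-entailment), while MNLI-trained models predict three classes, so "neutral" and "contradiction" predictions are collapsed into "non-entailment" before scoring. A minimal sketch of that scoring step is below; the toy predictions and function names are illustrative assumptions, not taken from the paper's released code.

```python
# Sketch of HANS-style scoring: collapse 3-way MNLI predictions to
# HANS's binary label scheme, then compute accuracy on one subcase.


def collapse(label: str) -> str:
    """Map an MNLI label to HANS's binary scheme:
    'neutral' and 'contradiction' both count as 'non-entailment'."""
    return "entailment" if label == "entailment" else "non-entailment"


def subcase_accuracy(predictions, gold):
    """Accuracy of collapsed predictions against binary gold labels."""
    collapsed = [collapse(p) for p in predictions]
    correct = sum(c == g for c, g in zip(collapsed, gold))
    return correct / len(gold)


# Subject-object swap examples in HANS are all labeled non-entailment
# (e.g., "the doctor visited the lawyer" / "the lawyer visited the doctor").
gold = ["non-entailment"] * 4
preds = ["entailment", "contradiction", "neutral", "contradiction"]
print(subcase_accuracy(preds, gold))  # 0.75: three of four collapse correctly
```

Per-run variability like the 0.00%–66.2% range reported above is then simply this subcase accuracy computed separately for each of the 100 fine-tuned instances.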
Comments: 11 pages, 7 figures; accepted to the 2020 BlackboxNLP workshop
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:1911.02969 [cs.CL]
  (or arXiv:1911.02969v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.1911.02969

Submission history

From: Tom McCoy
[v1] Thu, 7 Nov 2019 16:20:40 UTC (178 KB)
[v2] Mon, 16 Nov 2020 18:02:49 UTC (209 KB)